Defensive AI


How businesses can safeguard against rogue AI - Raconteur

#artificialintelligence

Three decades after a US university student called Robert Tappan Morris was convicted of launching the first widely known malware attack on the internet, cybercrime has become big business, costing the global economy an estimated £2.1m a minute. Internet service provider Beaming reports that cybercriminals are launching increasingly sophisticated attacks on an "unprecedented scale". The pandemic has exacerbated the situation: the sharp rise in remote working has enabled cybercriminals to exploit vulnerabilities in domestic internet connections to attack corporate systems. In 2020, the average UK business faced 686,961 attempts to breach its systems – 20% up on the previous year's figure – according to Beaming. That equates to an attack every 46 seconds.
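As a quick sanity check, Beaming's "attack every 46 seconds" figure follows directly from the annual count:

```python
# Verify the attack-frequency arithmetic from Beaming's 2020 figure.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000
attacks_per_year = 686_961              # average UK business, 2020

seconds_per_attack = SECONDS_PER_YEAR / attacks_per_year
print(f"One attack every {seconds_per_attack:.0f} seconds")  # prints "One attack every 46 seconds"
```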


Dear enterprise IT: Cybercriminals use AI too

#artificialintelligence

In a 2017 Deloitte survey, only 42% of respondents considered their institutions to be extremely or very effective at managing cybersecurity risk. The pandemic has certainly done nothing to alleviate these concerns. Despite the increased IT security investments companies made in 2020 to deal with distributed IT and work-from-home challenges, nearly 80% of senior IT workers and IT security leaders believe their organizations lack sufficient defenses against cyberattacks, according to IDG. Unfortunately, the cybersecurity landscape is poised to become more treacherous with the emergence of AI-powered cyberattacks, which could enable cybercriminals to fly under the radar of conventional, rules-based detection tools. For example, with AI in the mix, a fake email could become nearly indistinguishable from a message sent by a trusted contact.
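To see why rules-based tools struggle here, consider a toy keyword filter of the kind such detectors rely on (a hypothetical sketch, not any vendor's actual product): an AI-crafted message that paraphrases around the trigger phrases sails straight past it.

```python
import re

# Toy rules-based phishing filter: flags mail containing known trigger phrases.
# Real products are far more elaborate, but share the same brittleness.
RULES = [
    re.compile(r"verify your account", re.I),
    re.compile(r"urgent wire transfer", re.I),
    re.compile(r"click here immediately", re.I),
]

def is_flagged(message: str) -> bool:
    """Return True if any static rule matches the message."""
    return any(rule.search(message) for rule in RULES)

crude = "URGENT wire transfer needed - click here immediately!"
paraphrased = ("Hi Sam - following up on this morning's call, could you "
               "process the supplier payment before 3pm? Details attached.")

print(is_flagged(crude))        # True  - matches the static rules
print(is_flagged(paraphrased))  # False - same malicious intent, no trigger phrase
```

The second message carries the same intent as the first, but because no rule anticipates its wording, a static filter has nothing to match against.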


IT security: when AI fights against AI - Market Research Telecast

#artificialintelligence

Artificial intelligence is also advancing in IT security. In a survey of 300 managers, 96 percent said their companies are preparing for AI-supported IT attacks, in part by relying on "defensive AI". The survey was conducted with the assistance of the AI cybersecurity provider Darktrace. A separate survey of around 200 IT managers at medium-sized companies produced a more nuanced picture.


Survey finds 96% of execs are considering adopting 'defensive AI' against cyberattacks

#artificialintelligence

"Offensive AI" will enable cybercriminals to direct attacks against enterprises while flying under the radar of conventional, rules-based detection tools. That's according to a new survey published by MIT Technology Review Insights and Darktrace, which found that more than half of business leaders believe security strategies based on human-led responses are failing. The MIT and Darktrace report surveyed more than 300 C-level executives, directors, and managers worldwide to understand how they perceive the cyberthreats they're up against. A high percentage of respondents (55%) said traditional security solutions can't anticipate new AI-driven attacks, while 96% said they're adopting "defensive AI" to remedy this.


The battle of algorithms: Uncovering offensive AI

MIT Technology Review

As machine-learning applications move into the mainstream, a new era of cyber threat is emerging: one that uses offensive artificial intelligence (AI) to supercharge attack campaigns. Offensive AI allows attackers to automate reconnaissance, craft tailored impersonation attacks, and even self-propagate to avoid detection. Security teams can prepare by turning to defensive AI to fight back: autonomous cyber defense that learns on the job to detect and respond to even the most subtle indicators of an attack, no matter where it appears. MIT Technology Review recently sat down with experts from Darktrace (Marcus Fowler, director of strategic threat, and Max Heinemeyer, director of threat hunting) to discuss the current and emerging applications of offensive AI, defensive AI, and the ongoing battle of algorithms between the two.
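A minimal sketch of the "learns on the job" idea, assuming a z-score baseline over a single traffic metric (this is an illustrative stand-in, not Darktrace's actual method): model normal behaviour from observed values and flag anything far outside the learned range.

```python
from statistics import mean, stdev

class AnomalyBaseline:
    """Learn a running baseline of a numeric signal (e.g. bytes sent per
    minute by a host) and flag observations far outside the normal range.
    A toy stand-in for self-learning defensive AI, not a real product."""

    def __init__(self, threshold: float = 3.0):
        self.history: list[float] = []
        self.threshold = threshold  # z-score cut-off for raising an alert

    def observe(self, value: float) -> bool:
        """Record a value; return True if it looks anomalous."""
        if len(self.history) >= 10:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                return True  # large deviation from the learned baseline
        self.history.append(value)
        return False

baseline = AnomalyBaseline()
normal_traffic = [100, 110, 95, 105, 102, 98, 107, 99, 103, 101, 104]
alerts = [baseline.observe(v) for v in normal_traffic]
print(any(alerts))             # False - all within the learned range
print(baseline.observe(5000))  # True  - sudden exfiltration-sized burst
```

The point of the sketch is that nothing here is a hand-written rule: the detector's notion of "normal" comes entirely from the data it has seen, which is what lets this style of defense respond to attacks no signature anticipated.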


War of the AI algorithms: the next evolution of cyber attacks

#artificialintelligence

It has now been over three decades since the Morris Worm infected an estimated 10% of the 60,000 computers that were online in 1988. It was the personal malware project of a Cornell graduate student named Robert Tappan Morris, and is now widely deemed to be the world's first cyber-attack. Fast forward to today, and cyber attacks now stand among natural disasters and climate change in the World Economic Forum's annual list of global society's gravest threats. As businesses, schools, hospitals, and pretty much every other thread in the fabric of society have embraced the internet, cyber crime has transformed from an academic research project into a global marketplace of professional hacking services. On the geopolitical stage, governments have turned to hyper-advanced cyber attack tools as a means of causing physical damage and disruption to their adversaries' critical infrastructure. The National Cyber Security Centre (NCSC) has detected a rise in cyber attacks targeting academic institutions, including schools and universities.


Reducing the Risks Posed by Artificial Intelligence

#artificialintelligence

Artificial Intelligence (AI) is creating a new frontier in information security. Systems that independently learn, reason and act will increasingly replicate human behavior. Like humans, they will be flawed, but also capable of achieving great things. AI poses new information risks and makes some existing ones more dangerous. However, it can also be used for good and should become a key part of every organization's defensive arsenal. Business and information security leaders alike must understand both the risks and opportunities before embracing technologies that will soon become a critically important part of everyday business.


Fear and loathing in artificial intelligence

#artificialintelligence

But to thrive in the new era, organizations need to reduce the risks posed by AI and make the most of the opportunities it offers. This is the conclusion of a new report from the Information Security Forum aimed at helping business and security leaders to better understand what AI is, identify the information risks posed and how to mitigate them, and explore opportunities around using AI in defense. "AI is creating a new frontier in information security. Systems that independently learn, reason and act will increasingly replicate human behavior -- and like humans they will be flawed, but also capable of achieving great things. AI poses new information risks and makes some existing ones more dangerous," says Steve Durbin, managing director of the ISF.


Artificial Intelligence, Cybersecurity, and Common Sense

#artificialintelligence

Spectacular recent developments in Artificial Intelligence (AI) are feeding many fantasies in the world of cybersecurity. Almost anything can be heard on the topic, from the looming obsolescence of even the best defence solutions to open war between AIs developed by rival tech powers – including states. Executives often find it difficult to prepare for what lies ahead. Experts agree that AI should eventually benefit both attackers and their victims: progress in malicious AI necessarily pushes defensive AI to improve, and vice versa. Where the equilibrium of this cat-and-mouse game will settle is far from a matter of consensus, however, and it remains unclear which camp will ultimately lead the race.